
Advertorial feature by Big Innovation Centre
Spotlight on Policy
23 November 2020 (updated 9 September 2021, 12:35pm)

Waiting for a clear picture on facial recognition policy

New technology needs a robust regulatory framework to thrive

By Professor Birgitte Andersen

Facial recognition is changing e-commerce so quickly that in parts of the world you can now pay with your face. In 2018, an Alibaba fintech affiliate began trialling a system called Smile to Pay. Google and Amazon have begun to explore similar technologies, but privacy concerns in the United States and the European Union have held back their development.

There are cameras everywhere: on our streets and transport, where we work, in our homes, and on our computers and smartphones. The technology now allows cameras to recognise our faces, and even our emotions or presumed state of mind.

Facial recognition is convenient for unlocking your mobile phone with its inbuilt camera, and fun in digital games where children apply face filters. But when the technology is applied to policing, public transport, football grounds, or in-store marketing, we are in an entirely different situation, with new opportunities and challenges.

The United Kingdom’s dilemma

Facial recognition technology is already being developed and used in major economic centres and regions around the globe. These places are building accurate systems by collecting billions of pieces of data. If the UK stops the use of facial recognition, we will be trapped in a vicious circle: with neither the data nor the research and development incentives, our developers cannot innovate world-class, high-spec and safe solutions. The UK’s entrepreneurs will fall behind, and by the time regulation is in place our foreign competitors will have superior solutions. But if we go ahead and use facial recognition now, our regulatory system is not well placed to safeguard citizens, because we have not collected enough data to train our cameras for safe use.

To turn this vicious circle into a virtuous cycle, UK policymakers must be decisive on the “purposeful use” and limits of such facial recognition technologies, and the underpinning “data governance”. Big Innovation Centre is hosting the UK debate around this and other emerging technologies such as AI, blockchain and digital health.

Public and private institutions and businesses must be transparent about where facial and emotional recognition technologies are deployed and for what purpose, what kind of data they collect, and how they are processed and stored. Transparency is paramount.


Data protection and anti-discrimination must be guaranteed when deploying facial and emotional recognition technologies. They must be used in a way that protects citizens’ privacy and does not reinforce societal prejudices or exploit vulnerable groups and individuals.

Our policymakers must now decide on a clear and concise regulatory framework to ensure the fair and safe use of facial recognition technologies. Developers and those buying these technologies need to know the agreed standards to secure their investments in our future infrastructure.

Professor Birgitte Andersen is CEO of Big Innovation Centre and secretariat for the All-Party Parliamentary Groups on Artificial Intelligence and Blockchain. You can follow her on Twitter @BirgitteBIC, and BIC’s campaign at the UK Political Party Conference, “Will Face & Emotion Recognition Change the UK?” @BigInnovCentre